A Shift-Reduce Dependency Parser Based on Reinforcement Learning
Authors
Abstract
Similar Resources
Two-Phase Shift-Reduce Deterministic Dependency Parser of Chinese
In the Chinese language, a verb may have its dependents on its left, its right, or on both sides. Resolving the ambiguity of right-side dependencies is essential for dependency parsing of sentences with two or more verbs. Previous shift-reduce dependency parsers may not guarantee the connectivity of the dependency tree because of their weakness in resolving right-side dependencies. This paper...
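As a rough point of reference for the shift-reduce mechanics this abstract refers to (not the paper's two-phase algorithm), the sketch below shows a minimal arc-standard transition loop in Python; choose_action is a hypothetical stand-in for a trained classifier.

# Minimal arc-standard shift-reduce dependency parsing sketch.
# `choose_action` is a hypothetical classifier; the cited paper uses a
# two-phase deterministic scheme rather than this single loop.

def parse(words, choose_action):
    stack = [0]                              # token indices; 0 is an artificial ROOT
    buffer = list(range(1, len(words) + 1))
    arcs = []                                # (head, dependent) pairs

    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer, arcs)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC" and len(stack) >= 2:
            dep = stack.pop(-2)              # second-from-top depends on top
            arcs.append((stack[-1], dep))
        elif action == "RIGHT-ARC" and len(stack) >= 2:
            dep = stack.pop()                # top depends on second-from-top
            arcs.append((stack[-1], dep))
        else:
            break                            # no applicable action; stop
    return arcs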
A Best-First Probabilistic Shift-Reduce Parser
Recently proposed deterministic classifier-based parsers (Nivre and Scholz, 2004; Sagae and Lavie, 2005; Yamada and Matsumoto, 2003) offer attractive alternatives to generative statistical parsers. Deterministic parsers are fast, efficient, and simple to implement, but generally less accurate than optimal (or nearly optimal) statistical parsers. We present a statistical shift-reduce parser that ...
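The abstract describes best-first search over shift-reduce parser states ordered by probability. A minimal sketch of that idea, assuming hypothetical score_actions, apply_action, and is_final callables rather than the paper's actual model, might look like this:

import heapq

# Hedged sketch of best-first search over shift-reduce parser states.
# `score_actions(state)` is a hypothetical scorer returning
# (log_prob, action) pairs; `apply_action(state, action)` returns the
# successor state; `is_final(state)` tests for a complete parse.

def best_first_parse(initial_state, score_actions, apply_action, is_final):
    # Heap entries: (negated cumulative log-probability, tie-breaker, state)
    heap = [(0.0, 0, initial_state)]
    counter = 1
    while heap:
        neg_logp, _, state = heapq.heappop(heap)
        if is_final(state):
            return state                     # most probable complete parse found so far
        for log_prob, action in score_actions(state):
            successor = apply_action(state, action)
            heapq.heappush(heap, (neg_logp - log_prob, counter, successor))
            counter += 1
    return None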
SARDSRN: A Neural Network Shift-Reduce Parser
Simple Recurrent Networks (SRNs) have been widely used in natural language tasks. SARDSRN extends the SRN by explicitly representing the input sequence in a SARDNET self-organizing map. The distributed SRN component leads to good generalization and robust cognitive properties, whereas the SARDNET map provides exact representations of the sentence constituents. This combination allows SARDSRN to...
Shift-Reduce Dependency DAG Parsing
Most data-driven dependency parsing approaches assume that sentence structure is represented as trees. Although trees have several desirable properties from both computational and linguistic perspectives, the structure of linguistic phenomena that goes beyond shallow syntax often cannot be fully captured by tree representations. We present a parsing approach that is nearly as simple as curren...
Dependency Parsing with Energy-based Reinforcement Learning
We present a model which integrates dependency parsing with reinforcement learning based on a Markov decision process. At each time step, a transition is picked to construct the dependency tree in terms of the long-run reward. The optimal policy for choosing transitions can be found with the SARSA algorithm. In SARSA, an approximation of the state-action function can be obtained by calculating ...
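For readers unfamiliar with SARSA, the sketch below illustrates the on-policy update the abstract refers to, using a tabular Q function and a hypothetical parsing environment (env exposing reset, step, and legal_actions); the cited paper approximates the state-action function rather than storing it in a table.

import random
from collections import defaultdict

# Hedged sketch of tabular SARSA for choosing parser transitions.
# `env` is a hypothetical environment: reset() returns a start state,
# step(action) returns (next_state, reward, done), legal_actions(state)
# lists valid transitions. States must be hashable (e.g. feature tuples).

def sarsa(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)                   # Q[(state, action)]

    def pick(state):
        actions = env.legal_actions(state)
        if random.random() < epsilon:        # epsilon-greedy exploration
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state = env.reset()
        action = pick(state)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            next_action = pick(next_state) if not done else None
            # On-policy target: r + gamma * Q(s', a') for the action actually taken next
            target = reward + (gamma * Q[(next_state, next_action)] if not done else 0.0)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state, action = next_state, next_action
    return Q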
Journal
Journal title: DEStech Transactions on Computer Science and Engineering
Year: 2019
ISSN: 2475-8841
DOI: 10.12783/dtcse/cscme2019/32565